Search for: All records
Total Resources: 4
- Author / Contributor
  - Dinesha, Ujwal (4)
  - Shakkottai, Srinivas (4)
  - Bharadia, Dinesh (2)
  - Ghosh, Ushasi (2)
  - Wu, Raini (2)
  - Arunachalam, Subrahmanyam (1)
  - Khan, Nouman (1)
  - Ko, Woo-Hyun (1)
  - Li, Jian (1)
  - Mukherjee, Debajoy (1)
  - Narasimha, Dheeraj (1)
  - Subramanian, Vijay (1)
  - Xiong, Guojun (1)
Restless multi-armed bandits (RMAB) have been widely used to model constrained sequential decision-making problems, where the state of each restless arm evolves according to a Markov chain and each state transition generates a scalar reward. However, the success of RMAB crucially relies on the availability and quality of reward signals. Unfortunately, specifying an exact reward function in practice can be challenging and even infeasible. In this paper, we introduce Pref-RMAB, a new RMAB model in the presence of preference signals, where the decision maker observes only pairwise preference feedback, rather than scalar rewards, from the activated arms at each decision epoch. Preference feedback, however, arguably contains less information than the scalar reward, which makes Pref-RMAB seemingly more difficult. To address this challenge, we present a direct online preference learning (DOPL) algorithm for Pref-RMAB to efficiently explore the unknown environment, adaptively collect preference data in an online manner, and directly leverage the preference feedback for decision-making. We prove that DOPL yields a sublinear regret. To the best of our knowledge, this is the first algorithm to ensure $\tilde{\mathcal{O}}(\sqrt{T\ln T})$ regret for RMAB with preference feedback. Experimental results further demonstrate the effectiveness of DOPL.
Free, publicly accessible full text available April 24, 2026. (An illustrative sketch of this preference-feedback setting appears after the result list below.)
- Khan, Nouman; Dinesha, Ujwal; Arunachalam, Subrahmanyam; Narasimha, Dheeraj; Subramanian, Vijay; Shakkottai, Srinivas (IEEE)
- Ko, Woo-Hyun; Ghosh, Ushasi; Dinesha, Ujwal; Wu, Raini; Shakkottai, Srinivas; Bharadia, Dinesh (USENIX Symposium on Networked Systems Design and Implementation, NSDI 24)
- Ko, Woo-Hyun; Ghosh, Ushasi; Dinesha, Ujwal; Wu, Raini; Shakkottai, Srinivas; Bharadia, Dinesh (NSDI '24: 21st USENIX Symposium on Networked Systems Design and Implementation)
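To make the Pref-RMAB setting from the first result above concrete, here is a minimal sketch of a single decision epoch in which the learner observes only pairwise preferences between the activated arms rather than scalar rewards. The Bradley-Terry comparison model, the single per-arm transition kernel, and all variable names are illustrative assumptions, not the paper's DOPL implementation.

```python
# Minimal, illustrative sketch of one decision epoch in an RMAB environment with
# pairwise preference feedback instead of scalar rewards (Pref-RMAB-style setting).
# The Bradley-Terry preference model and the single transition kernel per arm are
# assumptions for illustration only.
import numpy as np

rng = np.random.default_rng(0)

N, S = 4, 3                                   # number of arms, states per arm
P = rng.dirichlet(np.ones(S), size=(N, S))    # P[i, s] = next-state distribution of arm i in state s
latent_reward = rng.random((N, S))            # hidden from the learner; only preferences are observed
state = rng.integers(0, S, size=N)            # current state of each arm
budget = 2                                    # arms activated per decision epoch


def step(activated):
    """Return pairwise preferences among activated arms, then advance all arms."""
    global state
    prefs = []
    for a in activated:
        for b in activated:
            if a < b:
                # Assumed Bradley-Terry model: arm a is preferred with probability
                # proportional to exp(latent reward) in its current state.
                ra, rb = latent_reward[a, state[a]], latent_reward[b, state[b]]
                p_a = np.exp(ra) / (np.exp(ra) + np.exp(rb))
                prefs.append((a, b, bool(rng.random() < p_a)))  # True: a preferred over b
    # Restless dynamics: every arm transitions each epoch. (In the full model the
    # kernel would also depend on whether the arm was activated; omitted here.)
    state = np.array([rng.choice(S, p=P[i, state[i]]) for i in range(N)])
    return prefs


# One decision epoch: activate `budget` arms and observe only preference feedback.
activated = rng.choice(N, size=budget, replace=False)
print(step(activated))
```

A learning algorithm such as DOPL would consume only the returned preference tuples, never the latent rewards, when deciding which arms to activate at the next epoch.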